

The Engineer's Dilemma: A Review of Establishing a Legal Framework for Integrating Machine Learning in Construction by Navigating Precedents and Industry Expectations

Naser, M. Z.

arXiv.org Artificial Intelligence

Despite the widespread interest in machine learning (ML), the engineering industry has not yet fully adopted ML-based methods, which has left engineers and stakeholders uncertain about the legal and regulatory frameworks that govern their decisions. This gap remains unaddressed as an engineer's decision-making process, typically governed by professional ethics and practical guidelines, now intersects with complex algorithmic outputs. To bridge this gap, this paper explores how engineers can navigate legal principles and legislative justifications that support and/or contest the deployment of ML technologies. Drawing on recent precedents and experiences gained from other fields, this paper argues that analogical reasoning can provide a basis for embedding ML within existing engineering codes while maintaining professional accountability and meeting safety requirements. In exploring these issues, the discussion focuses on established liability doctrines, such as negligence and product liability, and highlights how courts have evaluated the use of predictive models. We further analyze how legislative bodies and standard-setting organizations can furnish explicit guidance equivalent to prior endorsements of emergent technologies. This exploration stresses the importance of understanding the interplay between technical justifications and legal precedents in shaping an informed stance on ML's legitimacy in engineering practice. Finally, our analysis culminates in a legal framework for integrating ML through which stakeholders can critically assess the responsibilities, liabilities, and benefits inherent in ML-driven engineering solutions.


Bye-bye, Bluebook? Automating Legal Procedure with Large Language Models

Dahl, Matthew

arXiv.org Artificial Intelligence

Legal practice requires careful adherence to procedural rules. In the United States, few are more complex than those found in The Bluebook: A Uniform System of Citation. Compliance with this system's 500+ pages of byzantine formatting instructions is the raison d'être of thousands of student law review editors and the bête noire of lawyers everywhere. To evaluate whether large language models (LLMs) are able to adhere to the procedures of such a complicated system, we construct an original dataset of 866 Bluebook tasks and test flagship LLMs from OpenAI, Anthropic, Google, Meta, and DeepSeek. We show (1) that these models produce fully compliant Bluebook citations only 69%-74% of the time and (2) that in-context learning on the Bluebook's underlying system of rules raises accuracy only to 77%. These results caution against using off-the-shelf LLMs to automate aspects of the law where fidelity to procedure is paramount.
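
The paper's dataset is not reproduced here, but the evaluation it describes reduces to exact-match scoring: a citation either follows every Bluebook rule or it does not. Below is a minimal sketch of such a harness; the task file format, the `query_model` callable, and all names are hypothetical stand-ins, not the authors' code.

```python
# Minimal sketch of a Bluebook-style compliance harness. The task file and
# model-querying helper are hypothetical, not taken from the paper.
import json

def is_compliant(model_answer: str, gold_citation: str) -> bool:
    """Exact-match check: Bluebook compliance is all-or-nothing, so any
    deviation in punctuation, abbreviation, or ordering fails."""
    return model_answer.strip() == gold_citation.strip()

def evaluate(tasks_path: str, query_model) -> float:
    """Return the fraction of tasks with fully compliant citations."""
    with open(tasks_path) as f:
        tasks = [json.loads(line) for line in f]
    correct = sum(
        is_compliant(query_model(t["prompt"]), t["gold"]) for t in tasks
    )
    return correct / len(tasks)

# Usage (stub standing in for an LLM API call):
# accuracy = evaluate("bluebook_tasks.jsonl", lambda prompt: my_llm(prompt))
```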


Formalising Anti-Discrimination Law in Automated Decision Systems

Sargeant, Holli, Magnusson, Måns

arXiv.org Machine Learning

Algorithmic discrimination is a critical concern as machine learning models are used in high-stakes decision-making in legally protected contexts. Although substantial research on algorithmic bias and discrimination has led to the development of fairness metrics, several critical legal issues remain unaddressed in practice. To address these gaps, we introduce a novel decision-theoretic framework grounded in anti-discrimination law of the United Kingdom, which has global influence and aligns more closely with European and Commonwealth legal systems. We propose the 'conditional estimation parity' metric, which accounts for estimation error and the underlying data-generating process, aligning with legal standards. Through a real-world example based on an algorithmic credit discrimination case, we demonstrate the practical application of our formalism and provide insights for aligning fairness metrics with legal principles. Our approach bridges the divide between machine learning fairness metrics and anti-discrimination law, offering a legally grounded framework for developing non-discriminatory automated decision systems.
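
The abstract does not spell out the formal definition of 'conditional estimation parity', so the sketch below is only an illustrative stand-in: it compares group-wise error rates conditioned on a legitimate feature and attaches bootstrap intervals as one way to account for estimation error. The function name, the conditioning scheme, and the bootstrap design are assumptions, not the authors' formalism.

```python
# Illustrative only: NOT the paper's exact metric. Compares error rates per
# (protected group, legitimate condition) cell with bootstrap intervals, so
# apparent disparities can be judged against sampling noise.
import numpy as np

def conditional_error_rates(y_true, y_pred, group, condition,
                            n_boot=1000, seed=0):
    """Estimate the error rate and a 95% bootstrap interval per cell."""
    rng = np.random.default_rng(seed)
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    group, condition = np.asarray(group), np.asarray(condition)
    results = {}
    for g in np.unique(group):
        for c in np.unique(condition):
            mask = (group == g) & (condition == c)
            errs = (y_true[mask] != y_pred[mask]).astype(float)
            if errs.size == 0:
                continue  # no observations in this cell
            boots = [rng.choice(errs, size=errs.size, replace=True).mean()
                     for _ in range(n_boot)]
            results[(g, c)] = (errs.mean(),
                               np.percentile(boots, [2.5, 97.5]))
    return results
```

Parity would then be assessed by checking whether the cell-wise intervals overlap across protected groups for each value of the legitimate condition.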


Rules, Cases, and Reasoning: Positivist Legal Theory as a Framework for Pluralistic AI Alignment

Caputo, Nicholas A.

arXiv.org Artificial Intelligence

Legal theory can address two related key problems of alignment: pluralism and specification. Alignment researchers must determine how to specify what is concretely meant by vague principles like helpfulness and fairness, and they must ensure that their techniques do not exclude alternative perspectives on life and values. The law faces these same problems. Leading legal theories suggest the law solves these problems through the interaction of rules and cases, where general rules promulgated by a democratic authority are given specific content through their application over time. Concrete applications allow for convergence on practical meaning while preserving space for disagreement on values. These approaches suggest improvements to existing democratic alignment processes that use AI to create cases that give content to rules, allowing for more pluralist alignment.


Liability and Insurance for Catastrophic Losses: the Nuclear Power Precedent and Lessons for AI

Trout, Cristian

arXiv.org Artificial Intelligence

As AI systems become more autonomous and capable, experts warn that they could cause catastrophic losses. Drawing on the successful precedent set by the nuclear power industry, this paper argues that developers of frontier AI models should be assigned limited, strict, and exclusive third party liability for harms resulting from Critical AI Occurrences (CAIOs) - events that cause or easily could have caused catastrophic losses. Mandatory insurance for CAIO liability is recommended to overcome developers' judgment-proofness, mitigate winner's curse dynamics, and leverage insurers' quasi-regulatory abilities. Based on theoretical arguments and observations from the analogous nuclear power context, insurers are expected to engage in a mix of causal risk-modeling, monitoring, lobbying for stricter regulation, and providing loss prevention guidance in the context of insuring against heavy-tail risks from AI. While not a substitute for regulation, clear liability assignment and mandatory insurance can help efficiently allocate resources to risk-modeling and safe design, facilitating future regulatory efforts.


Not All Similarities Are Created Equal: Leveraging Data-Driven Biases to Inform GenAI Copyright Disputes

Hacohen, Uri, Haviv, Adi, Sarfaty, Shahar, Friedman, Bruria, Elkin-Koren, Niva, Livni, Roi, Bermano, Amit H

arXiv.org Artificial Intelligence

The advent of Generative Artificial Intelligence (GenAI) models, including GitHub Copilot, OpenAI GPT, and Stable Diffusion, has revolutionized content creation, enabling non-professionals to produce high-quality content across various domains. This transformative technology has led to a surge of synthetic content and sparked legal disputes over copyright infringement. To address these challenges, this paper introduces a novel approach that leverages the learning capacity of GenAI models for copyright legal analysis, demonstrated with GPT2 and Stable Diffusion models. Copyright law distinguishes between original expressions and generic ones (scènes à faire), protecting the former and permitting reproduction of the latter. However, this distinction has historically been challenging to make consistently, leading to over-protection of copyrighted works. GenAI offers an unprecedented opportunity to enhance this legal analysis by revealing shared patterns in preexisting works. We propose a data-driven approach to identify the genericity of works created by GenAI, employing "data-driven bias" to assess the genericity of expressive compositions. This approach aids in copyright scope determination by utilizing the capabilities of GenAI to identify and prioritize expressive elements and rank them according to their frequency in the model's dataset. The potential implications of measuring expressive genericity for copyright law are profound. Such scoring could assist courts in determining copyright scope during litigation, inform the registration practices of Copyright Offices, allowing registration of only highly original synthetic works, and help copyright owners signal the value of their works and facilitate fairer licensing deals. More generally, this approach offers valuable insights to policymakers grappling with adapting copyright law to the challenges posed by the era of GenAI.
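
As a rough illustration of the frequency-based idea, the toy sketch below scores a work's genericity as the average corpus frequency of its expressive elements. The tokenization into "elements" and the example counts are invented, and the paper itself leverages the GenAI models' own learned biases rather than raw counts.

```python
# Toy sketch of frequency-based genericity scoring. The element granularity
# and corpus counts are hypothetical placeholders, not the authors' pipeline.
from collections import Counter

def genericity_score(work_elements, corpus_counts: Counter) -> float:
    """Average corpus frequency of a work's elements: higher means more
    generic (widely shared motifs), lower means more original expression."""
    total = sum(corpus_counts.values()) or 1
    freqs = [corpus_counts[e] / total for e in work_elements]
    return sum(freqs) / len(freqs) if freqs else 0.0

# Usage: a common motif scores as generic, a rare one as original.
corpus_counts = Counter({"sunset over the sea": 900, "neon-lit alley": 40})
print(genericity_score(["sunset over the sea", "neon-lit alley"],
                       corpus_counts))
```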


Is the U.S. Legal System Ready for AI's Challenges to Human Values?

Cheong, Inyoung, Caliskan, Aylin, Kohno, Tadayoshi

arXiv.org Artificial Intelligence

Our interdisciplinary study investigates how effectively U.S. laws confront the challenges posed by Generative AI to human values. Through an analysis of diverse hypothetical scenarios crafted during an expert workshop, we have identified notable gaps and uncertainties within the existing legal framework regarding the protection of fundamental values, such as privacy, autonomy, dignity, diversity, equity, and physical/mental well-being. Constitutional and civil rights, it appears, may not provide sufficient protection against AI-generated discriminatory outputs. Furthermore, even if we exclude the liability shield provided by Section 230, proving causation for defamation and product liability claims is a challenging endeavor due to the intricate and opaque nature of AI systems. To address the unique and unforeseeable threats posed by Generative AI, we advocate for legal frameworks that evolve to recognize new threats and provide proactive, auditable guidelines to industry stakeholders. Addressing these issues requires deep interdisciplinary collaborations to identify harms, values, and mitigation strategies.


Antitrust and Artificial Intelligence (AAI): Antitrust Vigilance Lifecycle and AI Legal Reasoning Autonomy

Eliot, Lance

arXiv.org Artificial Intelligence

There is an increasing interest in the entwining of the field of antitrust with the field of Artificial Intelligence (AI), frequently referred to jointly as Antitrust and AI (AAI) in the research literature. This study focuses on the synergies entangling antitrust and AI, doing so to extend the literature by proffering the primary ways that these two fields intersect, consisting of: (1) the application of antitrust to AI, and (2) the application of AI to antitrust. To date, most of the existing research on this intermixing has concentrated on the former, namely the application of antitrust to AI, entailing how the marketplace will be altered by the advent of AI and the potential for adverse antitrust behaviors arising accordingly. Opting to explore more deeply the other side of this coin, this research closely examines the application of AI to antitrust and establishes an antitrust vigilance lifecycle to which AI is predicted to be substantively infused for purposes of enabling and bolstering antitrust detection, enforcement, and post-enforcement monitoring. Furthermore, a gradual and incremental injection of AI into antitrust vigilance is anticipated to occur as significant advances emerge amidst the Levels of Autonomy (LoA) for AI Legal Reasoning (AILR).


Legal Sentiment Analysis and Opinion Mining (LSAOM): Assimilating Advances in Autonomous AI Legal Reasoning

Eliot, Lance

arXiv.org Artificial Intelligence

An expanding field of substantive interest for the theory of the law and the practice-of-law entails Legal Sentiment Analysis and Opinion Mining (LSAOM), consisting of two often intertwined phenomena and actions underlying legal discussions and narratives: (1) Sentiment Analysis (SA) for the detection of expressed or implied sentiment about a legal matter within the context of a legal milieu, and (2) Opinion Mining (OM) for the identification and illumination of explicit or implicit opinion accompaniments immersed within legal discourse. Efforts to undertake LSAOM have historically been performed by human hand and cognition, and only thinly aided in more recent times by the use of computer-based approaches. Advances in Artificial Intelligence (AI) involving especially Natural Language Processing (NLP) and Machine Learning (ML) are increasingly bolstering how automation can systematically perform either or both of Sentiment Analysis and Opinion Mining, all of which is being inexorably carried over into engagement within a legal context for improving LSAOM capabilities. This research paper examines the evolving infusion of AI into Legal Sentiment Analysis and Opinion Mining and proposes an alignment with the Levels of Autonomy (LoA) of AI Legal Reasoning (AILR), plus provides additional insights regarding AI LSAOM in its mechanizations and potential impact to the study of law and the practicing of law.
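
As a concrete, minimal example of the kind of automation the paper surveys, the snippet below runs an off-the-shelf sentiment classifier over short legal passages via the Hugging Face transformers pipeline API. The example passages are invented, and a real LSAOM system would use a legal-domain model and add an opinion-mining layer on top.

```python
# Minimal sentiment analysis over legal text with a general-purpose model.
# A production LSAOM system would swap in a legal-domain model.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")  # downloads a default model

passages = [
    "The court found the defendant's conduct egregious and wholly unjustified.",
    "Counsel's argument was measured, well-supported, and persuasive.",
]
for text, result in zip(passages, classifier(passages)):
    print(f"{result['label']} ({result['score']:.2f}): {text}")
```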


Legal Judgment Prediction (LJP) Amid the Advent of Autonomous AI Legal Reasoning

Eliot, Lance

arXiv.org Artificial Intelligence

Legal Judgment Prediction (LJP) is a longstanding and open topic in the theory and practice-of-law. Predicting the nature and outcomes of judicial matters is abundantly warranted, keenly sought, and vigorously pursued by those within the legal industry and also by society as a whole. The tenuous act of generating judicially laden predictions has been limited in utility and exactitude, requiring further advancement. Various methods and techniques to predict legal cases and judicial actions have emerged over time, especially arising via the advent of computer-based modeling. There has been a wide range of approaches attempted, including simple calculative methods to highly sophisticated and complex statistical models. Artificial Intelligence (AI) based approaches have also been increasingly utilized. In this paper, a review of the literature encompassing Legal Judgment Prediction is undertaken, along with innovatively proposing that the advent of AI Legal Reasoning (AILR) will have a pronounced impact on how LJP is performed and its predictive accuracy. Legal Judgment Prediction is particularly examined using the Levels of Autonomy (LoA) of AI Legal Reasoning, plus, other considerations are explored including LJP probabilistic tendencies, biases handling, actor predictors, transparency, judicial reliance, legal case outcomes, and other crucial elements entailing the overarching legal judicial milieu.
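
To ground the discussion of computer-based LJP approaches, here is a deliberately simple baseline: TF-IDF features over case text with logistic regression in scikit-learn. The four toy cases and labels are invented for illustration and bear no relation to any dataset in the literature the paper reviews.

```python
# Toy LJP baseline: TF-IDF + logistic regression. Real systems train on
# large annotated corpora with far richer features; this data is invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

cases = [
    "plaintiff alleges breach of contract with documented damages",
    "defendant moves to dismiss for lack of standing",
    "clear statutory violation supported by extensive evidence",
    "claim barred by the statute of limitations",
]
outcomes = [1, 0, 1, 0]  # 1 = plaintiff prevails (toy labels)

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(cases, outcomes)
print(model.predict_proba(["documented breach with strong evidence"]))
```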